
    On universal oracle inequalities related to high-dimensional linear models

    This paper deals with recovering an unknown vector $\theta$ from the noisy data $Y=A\theta+\sigma\xi$, where $A$ is a known $(m\times n)$-matrix and $\xi$ is a white Gaussian noise. It is assumed that $n$ is large and $A$ may be severely ill-posed. Therefore, in order to estimate $\theta$, a spectral regularization method is used, and our goal is to choose its regularization parameter with the help of the data $Y$. For spectral regularization methods related to the so-called ordered smoothers [see Kneip, Ann. Statist. 22 (1994) 835--866], we propose new penalties in the principle of empirical risk minimization. The heuristic idea behind these penalties is related to balancing excess risks. Based on this approach, we derive a sharp oracle inequality controlling the mean square risks of data-driven spectral regularization methods.
    Comment: Published at http://dx.doi.org/10.1214/10-AOS803 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
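
    To make the setting concrete, here is a minimal sketch (not from the paper) of spectral cut-off regularization with a data-driven choice of the truncation level, assuming a simple Mallows-Cp-type penalty in place of the paper's excess-risk-balancing penalties; the function name `spectral_cutoff_estimate` and the toy diagonal operator are illustrative assumptions.

```python
# A minimal sketch of spectral cut-off regularization with a data-driven
# choice of the truncation level via penalized empirical risk (a simple
# Mallows-Cp-type penalty stands in for the paper's specific penalties).
import numpy as np

def spectral_cutoff_estimate(A, Y, sigma):
    """Return theta_hat chosen by penalized empirical risk over cut-off levels."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    z = U.T @ Y                      # coefficients of Y in the left singular basis
    best_risk, best_theta = np.inf, None
    for k in range(1, len(s) + 1):   # candidate cut-off levels (ordered smoothers)
        coords = np.zeros_like(s)
        coords[:k] = z[:k] / s[:k]   # invert only the k largest singular values
        theta_k = Vt.T @ coords
        resid = Y - A @ theta_k
        risk_k = resid @ resid + 2.0 * sigma**2 * k   # empirical risk + penalty
        if risk_k < best_risk:
            best_risk, best_theta = risk_k, theta_k
    return best_theta

# toy ill-posed example: rapidly decaying singular values
rng = np.random.default_rng(0)
m = n = 50
A = np.diag(1.0 / (1.0 + np.arange(n)) ** 2)
theta = rng.normal(size=n) / (1.0 + np.arange(n))
sigma = 0.01
Y = A @ theta + sigma * rng.normal(size=m)
theta_hat = spectral_cutoff_estimate(A, Y, sigma)
print("relative error:", np.linalg.norm(theta_hat - theta) / np.linalg.norm(theta))
```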

    Testing Monotonicity of Pricing Kernels

    The behaviour of market agents has always been extensively covered in the literature. Risk-averse behaviour, described by von Neumann and Morgenstern (1944) via a concave utility function, is considered to be a cornerstone of classical economics: agents prefer a fixed profit over an uncertain choice with the same expected value. Lately, however, there has been much discussion about the reliability of this approach, and some authors have shown that there is a reference point where market utility functions are convex. In this paper we construct a test of the concavity of agents' utility functions by testing the monotonicity of empirical pricing kernels (EPKs). A monotone decreasing EPK corresponds to a concave utility function, while a non-monotone decreasing EPK indicates a non-risk-averse pattern on one or more intervals of the utility function. We investigated EPKs for German DAX data for the years 2000, 2002 and 2004 and found evidence of non-concave utility functions: the null hypothesis of a monotone decreasing pricing kernel was rejected at the 5% and 10% significance levels in 2002 and at the 10% significance level in 2000.
    Keywords: Risk Aversion, Pricing Kernel
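
    For illustration only, the sketch below constructs a toy empirical pricing kernel as the ratio of a risk-neutral to a physical density estimate and applies a naive monotonicity check; this is not the paper's test, and in practice the risk-neutral density is backed out of option prices rather than sampled directly, as is assumed here.

```python
# A rough illustration (not the paper's test) of the object under study:
# an empirical pricing kernel as the ratio q_hat / p_hat of a risk-neutral
# to a physical density estimate, and a naive check of monotonicity.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# hypothetical samples: returns under the physical measure and pseudo-samples
# standing in for the risk-neutral distribution
phys = rng.normal(loc=0.05, scale=0.20, size=5000)
risk_neutral = rng.normal(loc=0.00, scale=0.22, size=5000)

p_hat = gaussian_kde(phys)
q_hat = gaussian_kde(risk_neutral)

grid = np.linspace(-0.4, 0.5, 200)
epk = q_hat(grid) / p_hat(grid)          # empirical pricing kernel on a grid

# A monotone decreasing EPK (concave utility) should have no upward runs.
upward = np.diff(epk) > 0
print("fraction of grid steps where the EPK increases:", upward.mean())
```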

    Exponential bounds for minimum contrast estimators

    The paper focuses on general properties of parametric minimum contrast estimators. The quality of estimation is measured in terms of the rate function related to the contrast, which allows us to derive exponential risk bounds invariant with respect to the detailed probabilistic structure of the model. This approach works well for small or moderate samples and covers the case of a misspecified parametric model. Another important feature of the presented bounds is that they may be used when the parametric set is unbounded and non-compact. These bounds do not rely on entropy or covering numbers and can be easily computed. The most important statistical fact resulting from the exponential bounds is a concentration inequality, which states that minimum contrast estimators concentrate with large probability on a level set of the rate function. In typical situations, every such set is a root-n neighborhood of the parameter of interest. We also show that the obtained bounds can help in bounding the estimation risk and constructing confidence sets for the underlying parameters. Our general results are illustrated for the case of an i.i.d. sample. We also consider several popular examples, including least absolute deviation estimation and the problem of estimating the location of a change point. What we obtain in these examples differs slightly from the usual asymptotic results presented in the statistical literature. This difference is due to the unboundedness of the parameter set and possible model misspecification.
    Comment: Submitted to the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org).
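
    Least absolute deviation estimation is one of the examples mentioned above; the following toy Monte Carlo, with all constants chosen purely for illustration, shows the kind of non-asymptotic root-n concentration that such exponential bounds quantify, here under a deliberately misspecified model.

```python
# A small Monte Carlo sketch of the concentration phenomenon for a minimum
# contrast estimator: the least absolute deviation (LAD) location estimate,
# i.e. the sample median, concentrates in a root-n neighborhood of the
# contrast minimizer even when the parametric model is misspecified.
import numpy as np

rng = np.random.default_rng(2)
n, n_rep = 400, 2000
# misspecified setting: data are exponential, a symmetric location model is
# wrong, but the LAD contrast is still minimized at the population median log(2)
target = np.log(2.0)
estimates = np.array([np.median(rng.exponential(size=n)) for _ in range(n_rep)])

deviations = np.sqrt(n) * (estimates - target)
for c in (1.0, 2.0, 3.0):
    frac = np.mean(np.abs(deviations) <= c)
    print(f"P(sqrt(n)|theta_hat - theta*| <= {c}): {frac:.3f}")
```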

    Exponential bounds for the minimum contrast with some applications

    The paper studies parametric minimum contrast estimates under rather general conditions. The quality of estimation is measured by the rate function related to the contrast, which allows the results to be stated without specifying the particular parametric structure of the model. This approach also permits going far beyond the classical i.i.d. case and obtaining nonasymptotic upper bounds for the risk. These bounds apply even for small or moderate samples. They also cover the case of misspecified parametric models. Another important feature of the approach is that it works well when the parametric set is unbounded and non-compact. In the case of a smooth contrast, the obtained exponential bounds do not rely on covering numbers and can be easily computed. We also illustrate how these bounds can be used for statistical inference: bounding the estimation risk, constructing confidence sets for the underlying parameters, and establishing the concentration properties of the minimum contrast estimate. The general results are specialized to the case of a Gaussian contrast and of an i.i.d. sample. We also illustrate the approach with several popular examples, including least squares and least absolute deviation contrasts and the problem of estimating the location of a change point. What we obtain in these examples differs slightly from the usual asymptotic results known in the classical literature. This difference is due to the unboundedness of the parameter set and possible model misspecification.
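
    The change-point example mentioned above can be illustrated with a short (hypothetical, not the paper's) least-squares contrast: the estimated change-point location is the split index minimizing the two-segment residual sum of squares.

```python
# An illustrative least-squares contrast for estimating the location of a
# change point in the mean of a sequence; the estimator is the minimizer of
# the residual sum of squares over candidate split points. Constants are
# illustrative only.
import numpy as np

def change_point_ls(x):
    """Return the split index minimizing the two-segment least-squares contrast."""
    n = len(x)
    best_k, best_rss = None, np.inf
    for k in range(1, n):                    # candidate change points
        left, right = x[:k], x[k:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(0.8, 1.0, 80)])
print("estimated change point:", change_point_ls(x), "(true: 120)")
```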

    On robust stopping times for detecting changes in distribution

    Let $X_1, X_2,\ldots$ be independent random variables observed sequentially and such that $X_1,\ldots,X_{\theta-1}$ have a common probability density $p_0$, while $X_\theta, X_{\theta+1},\ldots$ are all distributed according to $p_1 \neq p_0$. It is assumed that $p_0$ and $p_1$ are known, but the change time $\theta \in \mathbb{Z}_+$ is unknown, and the goal is to construct a stopping time $\tau$ that detects the change point $\theta$ as soon as possible. The existing approaches to this problem rely essentially on some a priori information about $\theta$. For instance, in Bayesian approaches it is assumed that $\theta$ is a random variable with a known probability distribution. In methods related to hypothesis testing, this a priori information is hidden in the so-called average run length. The main goal of this paper is to construct stopping times which do not make use of a priori information about $\theta$ but have nearly Bayesian detection delays. More precisely, we propose stopping times solving approximately the following problem: $\Delta(\theta;\tau_\alpha) \to \min_{\tau_\alpha}$ subject to $\alpha(\theta;\tau_\alpha) \le \alpha$ for any $\theta \ge 1$, where $\alpha(\theta;\tau) = \mathbf{P}_\theta\{\tau < \theta\}$ is the false alarm probability and $\Delta(\theta;\tau) = \mathbf{E}_\theta(\tau-\theta)_+$ is the average detection delay, and we explain why such stopping times are robust with respect to a priori information about $\theta$.
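
    As a point of comparison only, the sketch below implements a classical CUSUM-type stopping time driven by log-likelihood ratios between $p_1$ and $p_0$; the robust stopping times proposed in the paper differ, and the Gaussian densities and threshold used here are illustrative assumptions.

```python
# A sketch of a classical CUSUM-type stopping time for detecting a change
# from p0 to p1 (here two Gaussian densities with different means); this
# only illustrates the sequential detection setting described above.
import numpy as np
from scipy.stats import norm

def cusum_stopping_time(x, threshold, p0=norm(0.0, 1.0), p1=norm(1.0, 1.0)):
    """Stop the first time the CUSUM of log-likelihood ratios exceeds the threshold."""
    s = 0.0
    for t, xt in enumerate(x, start=1):
        llr = p1.logpdf(xt) - p0.logpdf(xt)
        s = max(0.0, s + llr)                # reflected random walk of LLRs
        if s >= threshold:
            return t                         # alarm time tau
    return None                              # no alarm within the sample

rng = np.random.default_rng(4)
theta = 200                                  # change time in this simulation
x = np.concatenate([rng.normal(0.0, 1.0, theta - 1), rng.normal(1.0, 1.0, 300)])
tau = cusum_stopping_time(x, threshold=5.0)
print("alarm time:", tau)
if tau is not None:
    print("detection delay:", max(0, tau - theta))
```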